
    Resource allocation for non-coherent localization with MIMO radar

    We consider a network of MIMO radars and seek the best allocation of bandwidth and power across the transmitting antennas so as to reach a prescribed accuracy in the localization of a single target. More precisely, we address the optimal allocation of bandwidth alone, as well as the joint optimal allocation of bandwidth and power. The allocation is obtained by minimizing the Cramér-Rao bound. The resulting non-convex optimization problem is solved with a difference-of-convex-functions programming algorithm. Numerical results show that the joint allocation gives the best performance and that bandwidth allocation plays the dominant role in that performance. In addition, a lower bound on the theoretical optimal Cramér-Rao bound, which is hard to compute, is also derived; it demonstrates the quality of the quasi-optimal solution.
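
    As a rough illustration of the optimization machinery described above, the sketch below runs a generic difference-of-convex-functions (DCA) iteration on a toy allocation problem with a unit budget constraint. The quadratic objective, the problem size, and the use of CVXPY are assumptions made for illustration; the paper's actual objective is a Cramér-Rao bound, which is not reproduced here.

```python
import numpy as np
import cvxpy as cp

# Toy DC program: minimize f(x) = g(x) - h(x) over a unit "budget" simplex,
# loosely mimicking a joint bandwidth/power allocation step. g and h are
# convex quadratics chosen so that f itself is non-convex.
rng = np.random.default_rng(0)
n = 4                                    # number of transmit antennas (assumed)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
G = A.T @ A + np.eye(n)                  # g(x) = x' G x   (convex)
H = B.T @ B + 2.0 * np.eye(n)            # h(x) = x' H x   (convex, subtracted)

x_k = np.full(n, 1.0 / n)                # feasible starting point
for _ in range(30):
    x = cp.Variable(n)
    grad_h = 2.0 * H @ x_k               # gradient of h at the current iterate
    # Convex surrogate: g(x) minus the linearization of h around x_k.
    surrogate = cp.quad_form(x, G) - x @ grad_h
    prob = cp.Problem(cp.Minimize(surrogate), [cp.sum(x) == 1, x >= 0])
    prob.solve()
    if np.linalg.norm(x.value - x_k) < 1e-6:
        break
    x_k = x.value

print("quasi-optimal allocation:", np.round(x_k, 3))
```

    Each surrogate problem is convex, and the iterates yield a non-increasing sequence of objective values, which is the property such DCA-type schemes rely on.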

    A deep learning model for the analysis of medical reports in ICD-10 clinical coding task

    Assigning a uniquely identifiable and easily traceable code to each pathology mentioned in a medical diagnosis adds value to the current way of archiving the health data collected to build the clinical history of each of us. Unfortunately, the enormous number of possible pathologies and medical conditions has led to extremely broad international coding systems that are difficult to consult, even for a human reader. This difficulty makes annotating diagnoses with ICD-10 codes very cumbersome, so it is rarely done. To support this operation, we propose a classification model able to analyze medical diagnoses written in natural language and automatically assign one or more international reference codes. The model has been evaluated on a dataset released in Spanish for the eHealth challenge (CodiEsp) of the international conference CLEF 2020, but it could be extended to any language written with Latin characters. The model uses a two-step classification process based on BERT and BiLSTM. Although still far from an accuracy sufficient to do without a licensed physician's opinion, the results obtained show the feasibility of the task and are a starting point for future studies in this direction.
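
    A minimal sketch of what a two-step BERT + BiLSTM pipeline of this kind could look like in PyTorch is given below. The pretrained model name, the vocabulary size, the size of the ICD-10 code inventory, and the pooling choices are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

N_CODES = 2000  # number of candidate ICD-10 codes (assumed)

class BertPrefilter(nn.Module):
    """Step 1: score every candidate code from the [CLS] representation."""
    def __init__(self, model_name="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.encoder.config.hidden_size, N_CODES)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]           # [CLS] token embedding
        return torch.sigmoid(self.head(cls))        # per-code probabilities

class BiLstmClassifier(nn.Module):
    """Step 2: re-score the shortlisted codes with a BiLSTM over the tokens."""
    def __init__(self, vocab_size=32000, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, N_CODES)

    def forward(self, token_ids):
        states, _ = self.lstm(self.emb(token_ids))
        pooled = states.mean(dim=1)                 # average over time steps
        return torch.sigmoid(self.head(pooled))
```

    In this sketch, codes whose step-1 probability exceeds a pre-filtering threshold would be kept and re-ranked by step 2; since both heads output per-code probabilities, a multi-label binary cross-entropy loss is the natural training objective.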

    Towards a social robot as interface for tourism recommendations

    The popularity of social robots is steadily increasing, mainly due to the interesting impact they have in several application domains. In this paper, we propose the use of the Pepper robot as the interface of a recommender system for tourism. In particular, we used the robot to interact with users and provide them with personalized recommendations about hotels, restaurants, and points of interest in the area. The personalization mechanism encoded in the social robot relies on soft biometric traits automatically recognized by the robot, such as age and gender, together with user interests and personal facets. These data feed a neural network that returns the most suitable recommendations for the target user. To evaluate the effectiveness of the interaction driven by a social robot, we carried out a user study whose goal was to evaluate: (1) how the robot affects the perceived accuracy of the recommendations; (2) how the user experience and engagement vary when interacting with a social robot instead of a classic web application. Although there is still considerable room for improvement, mainly due to the poor speech recognizer integrated in Pepper, the results showed that the robot can strongly attract people thanks to its presence and interaction capabilities. These findings encouraged us to perform a larger field study to test the approach in the wild and to understand whether it can increase the acceptance of recommendations in real environments.
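
    A compact, hypothetical sketch of the kind of network that could map the recognized traits to recommendation scores is shown below; the feature layout (age, gender, interests, facets), the layer sizes, and the item count are invented for illustration and are not the architecture used in the study.

```python
import torch
import torch.nn as nn

# Hypothetical user profile: [age (normalized), gender one-hot (2),
# interest flags (10), personal-facet flags (5)].
N_FEATURES = 1 + 2 + 10 + 5
N_ITEMS = 50          # hotels, restaurants, points of interest (assumed)

class ProfileRecommender(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, N_ITEMS),
        )

    def forward(self, profile):
        # One relevance score per item; the top-k items become recommendations.
        return self.net(profile)

model = ProfileRecommender()
scores = model(torch.rand(1, N_FEATURES))
top5 = scores.topk(5).indices            # indices of the 5 items to recommend
```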

    A study of Machine Learning models for Clinical Coding of Medical Reports at CodiEsp 2020

    The task of identifying one or more diseases associated with a patient's clinical condition is often very complex, even for doctors and specialists. The process is usually time-consuming and has to take into account different aspects of what has occurred, including the symptoms elicited and previous healthcare situations. The medical diagnosis is often provided to patients in written form without any correlation with a national or international standard. Even though the WHO (World Health Organization) released the ICD-10 international glossary of diseases, almost no doctor has enough time to manually associate the patient's clinical history with international codes. The CodiEsp task at CLEF 2020 addressed this issue by proposing the development of an automatic system for this task. Our solution investigated different machine learning strategies in order to identify an approach to this challenge. The main outcome of the experiments is that a strategy based on BERT for pre-filtering and one based on BiLSTM-CNN-SelfAttention for classification provide valuable results. We carried out several experiments on a subset of the training set to tune the final model submitted to the challenge. In particular, we analyzed the impact of the algorithm, the input encoding strategy, and the thresholds for multi-label classification. A further set of experiments was carried out during a post hoc analysis. The experiments confirmed that the strategy submitted to the CodiEsp task is the best performing one among those evaluated, and it allowed us to obtain a final mean average error of 0.202 on the test set. To support future development of the proposed approach and the replicability of the experiments, we made the source code publicly accessible.
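
    The threshold analysis mentioned above can be pictured with a small sketch like the one below, which selects a single global decision threshold for multi-label outputs by maximizing micro-averaged F1 on a validation split. The synthetic data and the choice of a single global threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
n_docs, n_codes = 200, 30
# Synthetic validation labels and scores loosely correlated with them.
y_true = (rng.random((n_docs, n_codes)) < 0.1).astype(int)
y_score = np.clip(0.6 * y_true + 0.4 * rng.random((n_docs, n_codes)), 0.0, 1.0)

best_t, best_f1 = 0.5, 0.0
for t in np.linspace(0.05, 0.95, 19):
    f1 = f1_score(y_true, (y_score >= t).astype(int),
                  average="micro", zero_division=0)
    if f1 > best_f1:
        best_t, best_f1 = t, f1

print(f"best threshold {best_t:.2f} with micro-F1 {best_f1:.3f}")
```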

    Extracting relations from Italian Wikipedia using self-training

    In this paper, we describe a supervised approach for extracting relations from Wikipedia. In particular, we exploit a self-training strategy to enrich a small number of manually labeled triples with new self-labeled examples. We integrate the supervised stage into WikiOIE, an existing framework for unsupervised extraction of relations from Wikipedia, and rely on WikiOIE and its unsupervised pipeline for extracting the initial set of unlabeled triples. An evaluation involving different algorithms and parameters shows that self-training helps to improve performance. Finally, we provide a dataset of about three million triples extracted from the Italian version of Wikipedia and perform a preliminary evaluation on a sample of it, obtaining promising results.
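
    A generic self-training loop of the kind used to enrich the seed triples might look like the sketch below; the features, the classifier, and the confidence cutoff are illustrative assumptions and do not reproduce the WikiOIE pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, confidence=0.9, rounds=5):
    """Iteratively add confidently self-labeled examples to the training set."""
    X, y, pool = X_labeled.copy(), y_labeled.copy(), X_unlabeled.copy()
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= confidence      # confident predictions only
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
    return clf

# Tiny synthetic demonstration: two classes in five dimensions.
rng = np.random.default_rng(0)
X_l = rng.standard_normal((40, 5)); y_l = (X_l[:, 0] > 0).astype(int)
X_u = rng.standard_normal((400, 5))
model = self_train(X_l, y_l, X_u)
```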

    A domain-independent framework for building conversational recommender systems

    Conversational Recommender Systems (CoRSs) implement a paradigm in which users interact with the system to define their preferences and discover items that best fit their needs. A CoRS can be straightforwardly implemented as a chatbot. Chatbots are becoming more and more popular in several applications such as customer care, health care, and medical diagnosis. In its most complex form, implementing a chatbot is a challenging task, since it requires knowledge of natural language processing, human-computer interaction, and so on. In this paper, we propose a general framework that makes it easy to build conversational recommender systems. The framework, based on a content-based recommendation algorithm, is independent of the domain: it allows building a conversational recommender system with different interaction modes (natural language, buttons, hybrid) for any domain. The framework has been evaluated on two state-of-the-art datasets with the aim of identifying the components that most influence the final recommendation accuracy.
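
    A minimal content-based core of the kind such a framework could wrap is sketched below, using TF-IDF item descriptions and cosine similarity to a profile built from liked items; the toy catalogue and the profile-averaging scheme are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {
    "hotel_a": "quiet seaside hotel with spa and sea view",
    "movie_b": "science fiction movie about space exploration",
    "restaurant_c": "family restaurant serving traditional seafood dishes",
    "book_d": "historical novel set during the renaissance",
}
names = list(items)
tfidf = TfidfVectorizer().fit_transform(items.values())

def recommend(liked, k=2):
    """Rank unseen items by cosine similarity to the mean profile of liked items."""
    liked_idx = [names.index(n) for n in liked]
    profile = np.asarray(tfidf[liked_idx].mean(axis=0))
    scores = cosine_similarity(profile, tfidf).ravel()
    ranked = [n for n in np.array(names)[scores.argsort()[::-1]] if n not in liked]
    return ranked[:k]

print(recommend(["hotel_a"]))   # items most similar to the liked hotel
```

    In this sketch, the conversational layer would only need to translate user answers into liked or disliked items and present the ranked results back, which is what keeps the recommendation core domain independent.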

    A personalized and context-aware news offer for mobile devices

    For classical domains, such as movies, recommender systems have proven their usefulness, but recommending news is more challenging due to the short life span of news content and the demand for up-to-date recommendations. This paper presents a news recommendation service with a content-based algorithm that uses features of a search engine for content processing and indexing, and a collaborative filtering algorithm for serendipity. The extension towards a context-aware algorithm is made to assess the information value of context in a mobile environment through a user study. Analyzing the interaction behavior and feedback of users on three recommendation approaches shows that interaction with the content is crucial input for user modeling. Context-aware recommendations using time and device type as context data outperform traditional recommendations, with an accuracy gain that depends on the contextual situation. These findings demonstrate that the user experience of news services can be improved by a personalized, context-aware news offer.
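
    One common way to inject context such as time of day and device type is contextual post-filtering: re-weighting base recommendation scores with context-conditioned weights, as in the hedged sketch below. The weight table, categories, and scores are invented for illustration and are not the model evaluated in the paper.

```python
# Contextual post-filtering: multiply base content scores by weights attached
# to the current (time slot, device type) context. Weights are hand-set here;
# in practice they would be estimated from interaction logs.
base_scores = {"politics": 0.7, "sports": 0.5, "finance": 0.6, "culture": 0.4}

context_weights = {
    ("morning", "phone"):  {"finance": 1.3, "politics": 1.1, "sports": 0.8, "culture": 0.9},
    ("evening", "tablet"): {"culture": 1.3, "sports": 1.2, "finance": 0.7, "politics": 1.0},
}

def rerank(context, k=2):
    weights = context_weights.get(context, {})
    adjusted = {cat: score * weights.get(cat, 1.0) for cat, score in base_scores.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)[:k]

print(rerank(("morning", "phone")))   # -> ['finance', 'politics']
```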

    HPLC-HRMS Global Metabolomics Approach for the Diagnosis of “Olive Quick Decline Syndrome” Markers in Olive Trees Leaves

    Olive quick decline syndrome (OQDS) is a multifactorial disease affecting olive plants. The onset of this economically devastating disease has been associated with a Gram-negative plant pathogen called Xylella fastidiosa (Xf). Liquid chromatography separation coupled to high-resolution mass spectrometry detection is one of the most widely applied technologies in metabolomics, as it provides a blend of rapid, sensitive, and selective qualitative and quantitative analyses together with the ability to identify metabolites. The purpose of this work is the development of a global metabolomics mass spectrometry assay able to identify OQDS molecular markers that discriminate between healthy (HP) and infected (OP) olive tree leaves. Results obtained via multivariate analysis through an HPLC-ESI HRMS platform (LTQ-Orbitrap from Thermo Scientific) show a clear separation between HP and OP samples. Among the differentially expressed metabolites, 18 organic compounds highly expressed in the OP group were annotated; the results of this metabolomic approach could be used as a fast and reliable method for the biochemical characterization of OQDS and to develop targeted MS approaches for OQDS detection by foliage analysis.
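
    The multivariate separation between healthy and infected samples can be pictured with an unsupervised projection such as the PCA sketch below, run on a synthetic feature-intensity matrix; the matrix shape, the group offsets, and the use of plain PCA (rather than the exact chemometric workflow of the study) are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_features = 500                                    # metabolite features (assumed)
hp = rng.normal(0.0, 1.0, size=(12, n_features))    # healthy leaves (synthetic)
op = rng.normal(0.8, 1.0, size=(12, n_features))    # infected leaves (synthetic)
X = StandardScaler().fit_transform(np.vstack([hp, op]))

scores = PCA(n_components=2).fit_transform(X)       # project onto 2 components
labels = ["HP"] * 12 + ["OP"] * 12
for lab, (pc1, pc2) in zip(labels, scores):
    print(f"{lab}: PC1={pc1:+7.2f}  PC2={pc2:+7.2f}")
```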

    Galileo, the European GNSS program, and LAGEOS

    With the ASI-INFN project “ETRUSCO-2 (Extra Terrestrial Ranging to Unified Satellite COnstellations-2)” we have the opportunity to continue and enhance the work already done within the former ETRUSCO INFN experiment. With ETRUSCO (2005-2010), the SCF LAB (Satellite/lunar laser ranging Characterization Facility LABoratory) team developed a new industry-standard test for the characterization of laser retroreflectors (the SCF-Test), an integrated and concurrent thermal and optical measurement in an accurately laboratory-simulated space environment. In the same period we had the opportunity to test several flight models of retroreflectors from NASA, ESA, and ASI. In doing so, we examined the detailed thermal behavior and the optical performance of LAGEOS (Laser GEOdynamics Satellites) cube corner retroreflectors and of many reflectors used on the Global Navigation Satellite System (GNSS) constellations currently in orbit, mainly GPS, GLONASS and GIOVE-A/GIOVE-B (Galileo In Orbit Validation Element) satellites, which deploy old-generation aluminium back-coated reflectors; we also SCF-Tested, for ESA, prototype new-generation uncoated reflectors for the Galileo IOV (In-Orbit Validation) satellites, which is the most important result presented here. ETRUSCO-2 inherits all this work together with a new laboratory with doubled instrumentation (cryostat, sun simulator, optical bench) inside a new, dedicated 85 m² class-10000 (or better) clean room. The new project aims at a revision of the SCF-Test expressly conceived to dynamically simulate the typical GNSS orbital environment, providing a new, reliable key performance indicator for future GNSS retroreflector payloads. Following up on this work, and using LAGEOS as a reference standard target in terms of optical performance, the SCF LAB research team led by S. Dell’Agnello is designing, building and testing a new-generation GNSS retroreflector array (GRA) for the new European GNSS constellation, Galileo.